Now we'll talk about Bayes' Law as applied to continuous random variables, which is most
of the work we will do with Bayes' Law.
The main problem with conditional probabilities for continuous random variables is that our basic definition is no longer valid, because that definition has the probability of the conditioning event in its denominator.
Now, if we want to condition on a random variable Y taking a specific value y, and Y is distributed according to a density, then the probability that Y actually attains the value y is zero.
The reason is that we can only assign positive probability to events of the form "Y lies in some interval I": the probability of Y in I is positive, but as the width of that interval vanishes, the probability goes to zero as well.
So we would be dividing by zero, which does not make any sense.
That we can't do.
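To see this shrinking numerically, here is a small sketch (my own illustration, not part of the lecture): using the standard normal CDF, expressed through the error function, the probability that the variable lands in an interval [0.5 - eps, 0.5 + eps] vanishes as eps goes to zero.

```python
import math

def normal_cdf(x):
    # CDF of the standard normal, written via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# P(Y in [0.5 - eps, 0.5 + eps]) for a shrinking interval around y = 0.5
probs = [normal_cdf(0.5 + eps) - normal_cdf(0.5 - eps)
         for eps in (0.1, 0.01, 0.001)]
print(probs)  # each probability is roughly proportional to the width 2*eps
```

The probabilities shrink in proportion to the interval width, so in the limit the conditioning event has probability exactly zero.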
But there is a generalization, which is surprisingly hard to derive rigorously.
That is really an interesting topic in itself, but it is completely out of the scope of this lecture.
So how can we condition on such an event?
I will just give you the formulas, and they are as follows.
Let's say that we again have two random variables X and Y with a joint density rho(x, y).
First we have to define the marginal density.
We saw what that is in the first lecture of this week: we integrate out everything that depends on y, which gives the marginal rho_X(x).
Then we can write down the density of the variable Y conditioned on the event X = x.
It is given by the ratio of the joint density rho(x, y) divided by this marginal rho_X(x).
And there are a few subtle things here.
The first is which symbol is the variable: y is the variable and x is fixed.
We don't touch x, so both the x in the numerator and the x in the denominator are held fixed.
In particular the denominator is a number, a constant, as long as we only vary y.
That is something we will see in the computation.
And usually we have to write this out explicitly in the form of the joint density divided by the integral of the joint density in the y-direction.
That is the conditional density.
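In symbols, the definitions just described read as follows (a restatement of the spoken formulas, writing rho for the joint density):

```latex
\rho_X(x) = \int \rho(x, y)\,\mathrm{d}y,
\qquad
\rho(y \mid X = x)
  = \frac{\rho(x, y)}{\rho_X(x)}
  = \frac{\rho(x, y)}{\int \rho(x, y')\,\mathrm{d}y'}.
```

Note that x appears only as a fixed parameter on the right-hand side; the integration in the denominator runs over the second argument alone.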
Now we will do a full computation of one example.
This example will incorporate every concept of continuous random variables that we have seen so far, and it goes as follows.
We have two iid Gaussians, X and Y.
The 2D Gaussian density looks like a bell, a proper bell, and if we look at its level sets from above, they are circles.
Now we want to compute the distribution of the random variable X conditioned on the event that X squared plus Y squared is equal to one.
That is the event that the Euclidean norm of the vector of those two random variables is one.
We can try to think about how that might look, but it's quite hard to draw.
I will just try and fail.
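Before the full computation, one way to get a feel for this conditional distribution is a quick Monte Carlo sketch (my own illustration, not the lecture's method): approximate the zero-probability event X² + Y² = 1 by a thin annulus 1 ≤ X² + Y² ≤ (1 + eps)², keep the Gaussian samples that land in it, and look at their x-coordinates. The width eps = 0.05 and the threshold 0.9 below are arbitrary choices for illustration.

```python
import random

random.seed(0)
eps = 0.05          # width of the annulus approximating the unit circle
xs = []
while len(xs) < 2000:
    # draw an iid Gaussian pair and keep it only if it lands in the annulus
    x, y = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    if 1.0 <= x * x + y * y <= (1.0 + eps) ** 2:
        xs.append(x)

mean_x = sum(xs) / len(xs)
# fraction of conditional samples with |x| close to 1
frac_near_one = sum(1 for x in xs if abs(x) > 0.9) / len(xs)
print(mean_x, frac_near_one)
```

By symmetry the conditional mean of X comes out close to zero, but a sizable share of the samples sits near ±1: conditioned on the circle, the mass piles up at the endpoints of [-1, 1], quite unlike the bell shape we started from.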